Automating Development Workflow with Docker and Ansible
While building sunlandgin.com, I realized that maintaining a clean development environment is just as important as the code itself. I wanted a setup that was reproducible, isolated, and easy to recover if things broke. To achieve this, I moved away from installing tools directly on my laptop and architected a headless Arch Linux server managed entirely by code.
This is the breakdown of my Infrastructure as Code (IaC) workflow.
The core goal was disposability. I wanted to be able to wipe my machine and be back up and running in minutes without manually reinstalling packages. To do this, I separated my infrastructure into three distinct layers:
- The Foundation: A minimal Arch Linux host.
- The Manager: Ansible playbooks to maintain the host.
- The Workspace: Docker containers for actual development.
Layer 1: The Foundation (Arch Linux)
I chose Arch Linux for the host because of its rolling release model and lightweight footprint. I configured the disk with Btrfs, which allows me to use Snapper to take instant filesystem snapshots. If a system update breaks the server, I can roll back the entire OS to a previous state in seconds.
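As a rough sketch of that recovery flow (the config name and snapshot number are placeholders, not my exact setup), the Snapper workflow boils down to a few commands:

```sh
# Take a manual snapshot before a risky change (default "root" config)
snapper -c root create --description "pre pacman -Syu"

# List snapshots to find the one to return to
snapper -c root list

# Roll the root filesystem back to snapshot 42, then reboot into it
snapper rollback 42
reboot
```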
Layer 2: The Manager (Ansible)
Instead of SSH-ing into the server and running commands manually, I use Ansible. I maintain a playbook.yml on my laptop that acts as the “source of truth” for the server’s configuration.
When I run the playbook, Ansible connects to the server and automatically:
- Updates the system (pacman -Syu).
- Installs baseline tools (docker, docker-compose, aws-cli, zsh).
- Configures Snapper for automatic system backups.
- Optimizes Docker storage by disabling Copy-on-Write (CoW) on Btrfs volumes to prevent fragmentation.
- Syncs my project files from AWS S3 to the local drive.
This ensures that my server is always in a known, consistent state.
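To give a sense of the shape of that playbook, here is a trimmed-down sketch; the host group, package list, and paths are assumptions, not my exact file:

```yaml
# playbook.yml (abridged sketch, not the full configuration)
- hosts: devserver
  become: true
  tasks:
    - name: Update the system (pacman -Syu)
      community.general.pacman:
        update_cache: true
        upgrade: true

    - name: Install baseline tools
      community.general.pacman:
        name: [docker, docker-compose, aws-cli, zsh, snapper]
        state: present

    - name: Disable Copy-on-Write on the Docker data directory
      ansible.builtin.command: chattr +C /var/lib/docker
      changed_when: false  # only affects files created after the flag is set

    - name: Make sure Docker is running and enabled at boot
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```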
Layer 3: The Workspace (Docker Dev Containers)
This is where the actual work happens. I don’t install Node.js, Python, or Next.js on the Arch host. Instead, every project has its own Dockerfile and docker-compose.yml.
For my sunlandgin.com project:
1. The Blueprint: A Dockerfile builds a custom Arch Linux image with zsh, neovim, and the specific Node.js version the app requires.
2. The Runner: docker-compose mounts my code into the container and forwards port 3000 to the host.
3. The Interface: I connect to the container using VS Code Remote, which lets me edit files and run terminals inside the container as if it were my local machine.
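A minimal docker-compose.yml along those lines might look like this (the service name, mount path, and keep-alive command are illustrative, not the project's actual file):

```yaml
# docker-compose.yml (illustrative sketch)
services:
  web:
    build: .                 # builds the custom Arch-based Dockerfile in this directory
    ports:
      - "3000:3000"          # forward the Next.js dev server to the host
    volumes:
      - .:/workspace         # mount the project source into the container
    working_dir: /workspace
    command: sleep infinity  # keep the container alive for VS Code Remote to attach
```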
Data Persistence & Sync
To keep my work safe, I built a custom synchronization workflow using aws s3 sync.
I wrote a master script that loops through my projects and pushes them to an S3 bucket. I implemented a filter system (similar to .gitignore) that ensures I only back up source code and configuration files, ignoring heavy artifacts like node_modules or .git folders.
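The heart of that script is aws s3 sync with exclude filters; a stripped-down sketch (the bucket name, project directory, and exclusion list are placeholders) looks like this:

```sh
#!/usr/bin/env bash
# backup.sh (sketch): push each project to S3, skipping heavy artifacts
BUCKET="s3://my-dev-backups"   # placeholder bucket name

for project in ~/projects/*/; do
  name=$(basename "$project")
  aws s3 sync "$project" "$BUCKET/$name" \
    --exclude "node_modules/*" \
    --exclude ".git/*" \
    --exclude ".next/*" \
    --delete
done
```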
Why This Setup Works
- Isolation: My Next.js dependencies never conflict with my Python data science tools. Each lives in its own container.
- Resilience: If I mess up my dev environment, I just run docker-compose down and rebuild it. If I mess up the server, I roll back with Snapper.
- Portability: I can spin up this exact environment on any machine—a cloud VPS, a Raspberry Pi, or a spare laptop—just by running my Ansible playbook.
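To make that last point concrete, bootstrapping a fresh host is a single command (the inventory file name and SSH user here are assumptions):

```sh
# Point the playbook at a brand-new machine and let it converge
ansible-playbook -i inventory.ini playbook.yml -u root
```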